spray painting


FoldPath: End-to-End Object-Centric Motion Generation via Modulated Implicit Paths

Rabino, Paolo, Tiboni, Gabriele, Tommasi, Tatiana

arXiv.org Artificial Intelligence

Object-Centric Motion Generation (OCMG) is instrumental in advancing automated manufacturing processes, particularly in domains requiring high-precision expert robotic motions, such as spray painting and welding. To realize effective automation, robust algorithms are essential for generating extended, object-aware trajectories across intricate 3D geometries. However, contemporary OCMG techniques are either based on ad-hoc heuristics or employ learning-based pipelines that still rely on sensitive post-processing steps to generate executable paths. We introduce FoldPath, a novel, end-to-end, neural-field-based method for OCMG. Unlike prior deep learning approaches that predict discrete sequences of end-effector waypoints, FoldPath learns the robot motion as a continuous function, thus implicitly encoding smooth output paths. This paradigm shift eliminates the need for brittle post-processing steps that concatenate and order the predicted discrete waypoints. Notably, our approach demonstrates superior predictive performance compared to recently proposed learning-based methods, and generalizes even in real industrial settings where only 70 expert samples are available. We validate FoldPath through comprehensive experiments in a realistic simulation environment and introduce new, rigorous metrics designed to comprehensively evaluate long-horizon robotic paths, thus advancing the OCMG task towards practical maturity.
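The "path as a continuous function" idea in the abstract can be illustrated with a toy sketch. Everything below is hypothetical and much simpler than FoldPath's actual architecture: a tiny random MLP maps a scalar path parameter t in [0, 1], modulated by an object-conditioned latent code, to a 3D end-effector position, so waypoints at any resolution come from simply sampling t.

```python
import numpy as np

def implicit_path(t, latent, w1, b1, w2, b2):
    """Toy neural field: map path parameter t in [0, 1], plus an
    object-conditioned latent code, to a 3D end-effector position.
    The latent shifts the hidden pre-activations (FiLM-style)."""
    h = np.tanh(w1 * t + b1 + latent)   # hidden layer modulated by the latent
    return w2 @ h + b2                  # 3D output position

rng = np.random.default_rng(0)
hidden = 16
w1, b1 = rng.normal(size=hidden), rng.normal(size=hidden)
w2, b2 = rng.normal(size=(3, hidden)), np.zeros(3)
latent = rng.normal(size=hidden)        # would come from a point-cloud encoder

# Sample the path at any resolution -- no discrete waypoint list is stored.
coarse = np.stack([implicit_path(t, latent, w1, b1, w2, b2)
                   for t in np.linspace(0.0, 1.0, 10)])
fine = np.stack([implicit_path(t, latent, w1, b1, w2, b2)
                 for t in np.linspace(0.0, 1.0, 1000)])
print(coarse.shape, fine.shape)  # (10, 3) (1000, 3)
```

Because the network itself is the path, there is nothing to concatenate or reorder afterwards; denser sampling of t directly yields denser waypoints.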


3D-CovDiffusion: 3D-Aware Diffusion Policy for Coverage Path Planning

Chen, Chenyuan, Ding, Haoran, Ding, Ran, Liu, Tianyu, He, Zewen, Duan, Anqing, Song, Dezhen, Liang, Xiaodan, Nakamura, Yoshihiko

arXiv.org Artificial Intelligence

Diffusion models, as a class of deep generative models, have recently emerged as powerful tools for robot skill learning, enabling stable training with reliable convergence. In this paper, we present an end-to-end framework for generating long, smooth trajectories that explicitly target high surface coverage across various industrial tasks, including polishing, robotic painting, and spray coating. Conventional methods are fundamentally constrained by their predefined functional forms, which limit the shapes of the trajectories they can represent and make it difficult to handle complex and diverse tasks. Moreover, they generalize poorly, often requiring manual redesign or extensive parameter tuning when applied to new scenarios. These limitations highlight the need for more expressive generative models, making diffusion-based approaches a compelling choice for trajectory generation. By iteratively denoising trajectories with carefully learned noise schedules and conditioning mechanisms, diffusion models not only ensure smooth and consistent motion but also flexibly adapt to the task context. In experiments, our method improves trajectory continuity, maintains high coverage, and generalizes to unseen shapes, paving the way for unified end-to-end trajectory learning across industrial surface-processing tasks without category-specific models. On average, our approach improves point-wise Chamfer distance by 98.2% and smoothness by 97.0%, while increasing surface coverage by 61% compared to prior methods. Our code is available at https://anonymous.4open.science/r/spraydiffusion_ral-2FCE/README.md.
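The iterative-denoising idea can be sketched with a generic DDPM-style reverse loop over a whole trajectory array. The linear noise schedule and the stand-in noise predictor below are illustrative assumptions, not the paper's learned components:

```python
import numpy as np

def denoise_trajectory(noise_pred, n_waypoints=50, dim=3, steps=20, seed=0):
    """Generic DDPM-style reverse process over a trajectory array.
    noise_pred(x, t) stands in for a learned, task-conditioned network."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, steps)      # toy linear schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    x = rng.normal(size=(n_waypoints, dim))     # start from pure noise
    for t in reversed(range(steps)):
        eps = noise_pred(x, t)
        # Posterior mean of the standard DDPM reverse step.
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:                               # inject noise except at the last step
            x += np.sqrt(betas[t]) * rng.normal(size=x.shape)
    return x

# Stand-in "network" that always predicts zero noise (pure illustration).
traj = denoise_trajectory(lambda x, t: np.zeros_like(x))
print(traj.shape)  # (50, 3)
```

In the real system the predictor would be conditioned on the 3D object, so the same loop yields trajectories adapted to each task context.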


MaskPlanner: Learning-Based Object-Centric Motion Generation from 3D Point Clouds

Tiboni, Gabriele, Camoriano, Raffaello, Tommasi, Tatiana

arXiv.org Artificial Intelligence

Object-Centric Motion Generation (OCMG) plays a key role in a variety of industrial applications, such as robotic spray painting and welding, requiring efficient, scalable, and generalizable algorithms to plan multiple long-horizon trajectories over free-form 3D objects. However, existing solutions rely on specialized heuristics, expensive optimization routines, or restrictive geometry assumptions that limit their adaptability to real-world scenarios. In this work, we introduce a novel, fully data-driven framework that tackles OCMG directly from 3D point clouds, learning to generalize expert path patterns across free-form surfaces. We propose MaskPlanner, a deep learning method that predicts local path segments for a given object while simultaneously inferring "path masks" to group these segments into distinct paths. This design induces the network to capture both local geometric patterns and global task requirements in a single forward pass. Extensive experimentation on a realistic robotic spray painting scenario shows that our approach attains near-complete coverage (above 99%) for unseen objects, while it remains task-agnostic and does not explicitly optimize for paint deposition. Moreover, our real-world validation on a 6-DoF specialized painting robot demonstrates that the generated trajectories are directly executable and yield expert-level painting quality. Our findings crucially highlight the potential of the proposed learning method for OCMG to reduce engineering overhead and seamlessly adapt to several industrial use cases.
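The "path mask" grouping step can be pictured at toy scale. The sketch below is a hypothetical simplification (random scores standing in for network outputs): each predicted local segment is assigned to the candidate path with the highest mask score, then segments are collected per path.

```python
import numpy as np

def group_segments_by_mask(segments, mask_logits):
    """Assign each predicted local segment to the candidate path whose
    mask score is highest, mimicking the 'path mask' grouping idea.
    segments: (N, 2, 3) array of N short segments (start/end points).
    mask_logits: (N, K) score of each segment under K candidate paths."""
    labels = mask_logits.argmax(axis=1)            # hard assignment per segment
    return {k: segments[labels == k] for k in np.unique(labels)}

rng = np.random.default_rng(1)
segments = rng.normal(size=(8, 2, 3))              # 8 toy segments
mask_logits = rng.normal(size=(8, 3))              # 3 candidate paths
paths = group_segments_by_mask(segments, mask_logits)
print(sum(len(v) for v in paths.values()))         # every segment is assigned: 8
```

The point of doing this inside a single forward pass, as the abstract notes, is that segment geometry and path membership are predicted jointly rather than recovered by a separate clustering stage.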


Solve paint color effect prediction problem in trajectory optimization of spray painting robot using artificial neural network inspired by the Kubelka Munk model

Wang, Hexiang, Bi, Zhiyuan, Cheng, Zhen, Li, Xinru, Zhu, Jiake, Jiang, Liyuan, Li, Hao, Lu, Shizhou

arXiv.org Artificial Intelligence

Currently, spray-painting robot trajectory planning technology aimed at spray painting quality mainly applies to single-color spraying. Conventional methods of optimizing the spray gun trajectory based on simulated thickness can only qualitatively reflect the color distribution, and cannot simulate the color effect of spray painting at the pixel level. Therefore, it is not possible to accurately control the area covered by the color and the gradation of the edges of the area, and it is also difficult to deal with situations where multiple colors of paint are sprayed in combination. To solve the above problems, this paper takes inspiration from the Kubelka-Munk model and combines 3D machine vision and an artificial neural network to propose a spray painting color effect prediction method. The method predicts the execution effect of the spray gun trajectory with pixel-level accuracy in terms of the surface color of the workpiece after spray painting. On this basis, the method can replace the traditional thickness simulation method in establishing the objective function of the spray gun trajectory optimization problem, and thus solve the difficult problem of spray gun trajectory optimization for multi-color combination spraying. In this paper, the mathematical model of the spray painting color effect prediction problem is first determined through analysis of the Kubelka-Munk paint film color rendering model, and the spray painting color effect dataset is established with the help of a depth camera and point cloud processing algorithms. After that, the multilayer perceptron model is improved with gating and residual structures and used for the color prediction task. To verify ...
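For context on the model the abstract builds on: the classical Kubelka-Munk relation links the absorption-to-scattering ratio K/S of an opaque (infinitely thick) paint film to its reflectance R. A minimal implementation of that relation and its inverse (not the paper's neural predictor, which replaces this kind of analytic model):

```python
import math

def km_reflectance(k_over_s):
    """Kubelka-Munk reflectance R_inf of an opaque paint film,
    given the absorption-to-scattering ratio K/S."""
    return 1.0 + k_over_s - math.sqrt(k_over_s**2 + 2.0 * k_over_s)

def km_ratio(r_inf):
    """Inverse relation: K/S = (1 - R_inf)^2 / (2 * R_inf)."""
    return (1.0 - r_inf) ** 2 / (2.0 * r_inf)

r = km_reflectance(0.5)
print(round(r, 4), round(km_ratio(r), 4))  # the ratio round-trips to 0.5
```

Because this closed form holds only per wavelength and for idealized films, learning the mapping from trajectory to observed pixel color with a neural network, as the paper does, sidesteps those assumptions.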


PaintNet: Unstructured Multi-Path Learning from 3D Point Clouds for Robotic Spray Painting

Tiboni, Gabriele, Camoriano, Raffaello, Tommasi, Tatiana

arXiv.org Artificial Intelligence

Popular industrial robotic problems such as spray painting and welding require (i) conditioning on free-shape 3D objects and (ii) planning of multiple trajectories to solve the task. Yet, existing solutions make strong assumptions on the form of input surfaces and the nature of output paths, resulting in limited approaches unable to cope with real-data variability. By leveraging recent advances in 3D deep learning, we introduce a novel framework capable of dealing with arbitrary 3D surfaces and handling a variable number of unordered (i.e. unstructured) output paths. Our approach predicts local path segments, which can later be concatenated to reconstruct long-horizon paths. We extensively validate the proposed method in the context of robotic spray painting by releasing PaintNet, the first public dataset of expert demonstrations on free-shape 3D objects collected in a real industrial scenario. A thorough experimental analysis demonstrates the capabilities of our model to promptly predict smooth output paths that cover up to 95% of previously unseen object surfaces, even without explicitly optimizing for paint coverage.
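The segment-concatenation step mentioned in the abstract can be sketched with a simple greedy heuristic; this is an illustrative stand-in under the assumption of a single path and 2D points, not the paper's actual post-processing:

```python
import numpy as np

def concatenate_segments(segments):
    """Greedily chain short segments into one long path by repeatedly
    appending the unused segment whose start point is closest to the
    current path end. Toy stand-in for segment concatenation."""
    remaining = list(range(len(segments)))
    first = remaining.pop(0)
    path = [segments[first][0], segments[first][1]]
    while remaining:
        end = path[-1]
        dists = [np.linalg.norm(segments[i][0] - end) for i in remaining]
        nxt = remaining.pop(int(np.argmin(dists)))
        path.append(segments[nxt][1])
    return np.stack(path)

# Four unit segments along the x-axis, given out of order.
segs = np.array([[[0.0, 0.0], [1.0, 0.0]],
                 [[2.0, 0.0], [3.0, 0.0]],
                 [[3.0, 0.0], [4.0, 0.0]],
                 [[1.0, 0.0], [2.0, 0.0]]])
path = concatenate_segments(segs)
print(path[:, 0])  # x-coordinates recovered in order: [0. 1. 2. 3. 4.]
```

Greedy chaining like this is exactly the kind of brittle post-processing that later work (e.g. the neural-field formulation above) tries to eliminate, since a single bad nearest-neighbor choice corrupts the rest of the path.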


Video Friday: Ladder-Climbing Snake Robot, and More

IEEE Spectrum Robotics

IROS has just ended in Spain but our coverage continues and we'll be bringing you more stories over the next week or two. Today we have a special edition of Video Friday, featuring some of the best videos from the conference. Next week, Video Friday returns to its normal format, so if you have video suggestions, keep them coming as usual. In this paper, we propose a 3D walking and skating motion generation method to achieve sequential walking and skating motion with skateboard and roller skate. For generating the sequential stable skating motion using passive wheels, we must deal with non-coplanar contacts with anisotropic friction.


Move over Banksy! Robotic spray reproduces photos as giant murals

AITopics Original Links

If you have ever dreamed of becoming a graffiti artist like Banksy, a robotic spray could help you cheat your way to greatness. The prototype gadget attaches to a can of paint and helps a novice replicate a photograph on a large canvas or wall. The invention could spawn a graffiti revolution, but is more likely to be used to create digital characters for films, for example. A robotic spray can is able to help a novice replicate a photograph on a large canvas or wall. The accompanying image shows the 'target' photograph (top inset), the simulated image (bottom inset), and the finished painting (main). Scientists from ETH Zurich, Disney Research Zurich, Dartmouth College and Columbia University came up with the smart spray can and have demonstrated its artistic abilities in an impressive video.
